
    Noisy Importance Sampling Actor-Critic: An Off-Policy Actor-Critic With Experience Replay

    This paper presents Noisy Importance Sampling Actor-Critic (NISAC), a set of empirically validated modifications to the advantage actor-critic algorithm (A2C) that enable off-policy reinforcement learning and increase performance. NISAC uses additive action-space noise, aggressive truncation of importance sampling weights, and large batch sizes. We show that additive noise drastically changes how off-sample experience is weighted in policy updates. The modified algorithm converges faster and is more sample efficient than both the on-policy actor-critic A2C and the importance-weighted off-policy actor-critic algorithm. Compared to state-of-the-art (SOTA) methods such as actor-critic with experience replay (ACER), NISAC approaches their performance on several of the tested environments while training 40% faster and being significantly easier to implement. The effectiveness of NISAC is demonstrated against existing on-policy and off-policy actor-critic algorithms on a subset of the Atari domain.
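
    The weighting mechanism at the heart of this approach can be illustrated with a short sketch. Below is a minimal, hypothetical Python illustration of additive action-space noise and truncated importance sampling weights; the function names, the noise form, and the truncation constant `c` are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_noisy_action(logits, noise_scale=0.1):
    """Sample an action after adding Gaussian noise to the policy logits
    (one plausible form of additive action-space noise)."""
    noisy = logits + rng.normal(0.0, noise_scale, size=logits.shape)
    probs = np.exp(noisy - noisy.max())
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs), probs

def truncated_is_weight(pi_current, pi_behaviour, c=1.0):
    """Aggressively truncate the importance weight at a constant c, so a
    single replayed transition cannot dominate the policy update."""
    return min(pi_current / pi_behaviour, c)

# A replayed transition's policy-gradient term would be scaled roughly as:
#   truncated_is_weight(pi_current, pi_behaviour) * advantage * grad_log_pi
```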

    Dynamic Planning Networks

    We introduce Dynamic Planning Networks (DPN), a novel architecture for deep reinforcement learning that combines model-based and model-free aspects for online planning. Our architecture learns to dynamically construct plans using a learned state-transition model by selecting and traversing between simulated states and actions to maximize information before acting. DPN learns to form plans efficiently by expanding a single action-conditional state transition at a time instead of exhaustively evaluating each action, reducing the number of state transitions used during planning. We observe emergent planning patterns in our agent, including classical search methods such as breadth-first and depth-first search. DPN shows improved performance over existing baselines across multiple axes.
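
    The single-expansion planning loop can be sketched generically. This is a hedged illustration, not the DPN architecture itself: `model`, `value_fn`, and `select_action` stand in for DPN's learned transition model, value estimate, and learned expansion policy.

```python
import heapq

def plan(root_state, model, value_fn, select_action, budget=10):
    """Expand one action-conditional transition per step, traversing the
    most promising simulated states instead of evaluating every action."""
    frontier = [(-value_fn(root_state), 0, root_state)]
    best_value, counter = value_fn(root_state), 1
    while frontier and budget > 0:
        _, _, state = heapq.heappop(frontier)      # most promising state so far
        action = select_action(state)              # learned: which action to try
        next_state = model(state, action)          # single simulated transition
        value = value_fn(next_state)
        best_value = max(best_value, value)
        heapq.heappush(frontier, (-value, counter, next_state))
        counter += 1
        budget -= 1
    return best_value

# Toy usage: states are integers, the "model" shifts them, value prefers 42.
print(plan(0, lambda s, a: s + a, lambda s: -abs(s - 42), lambda s: 7, budget=12))
```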

    Contextual Anomaly Detection in Big Sensor Data

    Performing predictive modelling, such as anomaly detection, on Big Data is a difficult task. This problem is compounded as more and more sources of Big Data are generated by environmental sensors, logging applications, and the Internet of Things. Further, most current techniques for anomaly detection consider only the content of the data source, i.e. the data itself, without concern for the context of the data. As data becomes more complex, it is increasingly important to bias anomaly detection techniques toward the context, whether spatial, temporal, or semantic. This paper outlines a contextual anomaly detection technique for use in streaming sensor networks. The technique uses a well-defined content anomaly detection algorithm for real-time point anomaly detection. Additionally, we present a post-processing context-aware anomaly detection algorithm based on sensor profiles, which are groups of contextually similar sensors generated by a multivariate clustering algorithm. Our proposed approach has been implemented and evaluated with real-world data provided by Powersmiths, located in Brampton, Ontario, Canada.
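
    The two-stage idea can be sketched briefly. The sketch below is an assumed illustration, not the paper's algorithm: it builds sensor profiles with k-means over per-sensor summary features, then flags a reading that deviates strongly from its profile's peers.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical per-sensor summary features (e.g. mean, variance, daily peak).
features = rng.normal(size=(50, 3))

# Sensor profiles: clusters of contextually similar sensors.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(features)

def is_contextual_anomaly(sensor_idx, reading, threshold=3.0):
    """Flag a reading whose z-score against its profile peers is too large."""
    peers = features[kmeans.labels_ == kmeans.labels_[sensor_idx]]
    mu, sigma = peers[:, 0].mean(), peers[:, 0].std() + 1e-9
    return abs(reading - mu) / sigma > threshold

print(is_contextual_anomaly(0, reading=5.0))  # far outside its profile
```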

    A Unit Test Approach for Database Schema Evolution

    Context: The constant changes in today’s business requirements demand continuous database revisions. Hence, database structures, not unlike software applications, deteriorate over time and thus require refactoring to extend their lifespan. Although unit tests support changes to application programs and refactoring, there is currently a lack of testing strategies for database schema evolution. Objective: This work examines the challenges of database schema evolution and explores the possibility of using various testing strategies to assist with it. Specifically, the work proposes a novel unit test approach for application code that accesses databases, with the objective of proactively evaluating the code against the altered database. Method: The approach was validated through the implementation of a testing framework in conjunction with a sample application and a relatively simple database schema. Although the database schema in this study was simple, it was nevertheless able to demonstrate the advantages of the proposed approach. Results: After changes in the database schema, the proposed approach found all SELECT statements, as well as the majority of other statements, requiring modification in the application code. Due to its efficiency with SELECT statements, the proposed approach is expected to be even more successful with data warehouse applications, where SELECT statements are dominant. Conclusion: The unit test approach that accesses databases has proven successful in evaluating application code against the evolved database. In particular, the approach is simple and straightforward to implement, which makes it easily adoptable in practice.
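
    A minimal sketch of such a test, assuming Python's built-in sqlite3 and unittest; the schema, queries, and the renamed column are invented for illustration.

```python
import sqlite3
import unittest

# Statements extracted from the application code (hypothetical examples).
APP_QUERIES = [
    ("SELECT name, email FROM customers", ()),
    ("INSERT INTO customers (name, email) VALUES (?, ?)", ("Ada", "ada@x.io")),
]

class SchemaEvolutionTest(unittest.TestCase):
    def setUp(self):
        self.db = sqlite3.connect(":memory:")
        # Evolved schema: suppose 'email' was renamed to 'email_address'.
        self.db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY,"
                        " name TEXT, email_address TEXT)")

    def test_queries_against_evolved_schema(self):
        for sql, params in APP_QUERIES:
            with self.subTest(sql=sql):
                try:
                    self.db.execute(sql, params)
                except sqlite3.OperationalError as exc:
                    self.fail(f"statement needs modification: {sql!r} ({exc})")

if __name__ == "__main__":
    unittest.main()
```

    Run against the evolved schema, both statements fail and are reported as requiring modification, which is exactly the signal this kind of test provides.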

    Extension of Object-Oriented Metrics Suite for

    Software developers require information to understand the characteristics of systems, such as complexity and maintainability. To further understand and determine the characteristics of object-oriented (OO) systems, this paper describes research that identifies attributes that are valuable in determining the difficulty of implementing changes during maintenance, as well as the possible effects that such changes may produce. A set of metrics is proposed to quantify and measure these attributes. The proposed complexity metrics are used to determine the difficulty of implementing changes through the measurement of method complexity, method diversity, and complexity density. The paper establishes impact metrics to determine the potential effects of making changes to a class, and dependence metrics to measure the potential effects on a given class resulting from changes in other classes. The case study shows that the proposed metrics provide additional information not sufficiently captured by related existing OO metrics. The metrics also prove useful in the investigation of large systems, correlating with project outcomes.
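
    To make the flavour of such metrics concrete, here is a small illustrative sketch; the formulas below are assumed stand-ins, not the paper's exact definitions.

```python
def complexity_density(method_complexities, loc):
    """Illustrative class-level metric: total method complexity per line of code."""
    return sum(method_complexities) / max(loc, 1)

def method_diversity(method_complexities):
    """Illustrative spread of complexity across a class's methods
    (population standard deviation, an assumed definition)."""
    n = len(method_complexities)
    mean = sum(method_complexities) / n
    return (sum((c - mean) ** 2 for c in method_complexities) / n) ** 0.5

# A class with five methods of cyclomatic complexity 1..5 and 120 LOC:
print(complexity_density([1, 2, 3, 4, 5], loc=120))  # 0.125
print(method_diversity([1, 2, 3, 4, 5]))             # ~1.41
```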

    Deep neural network for load forecasting centred on architecture evolution

    Nowadays, electricity demand forecasting is critical for electric utility companies. Accurate residential load forecasting plays an essential role as an individual component of integrated areas such as neighborhood load consumption. Short-term load forecasting can help electric utility companies reduce waste, because electric power is expensive to store. This paper proposes a novel method to evolve deep neural networks for time series forecasting, applied to residential load forecasting. The approach centres its efforts on the neural network architecture during the evolution. Then, the model weights are adjusted using an evolutionary optimization technique to tune the model performance automatically. Experimental results on a large dataset containing the hourly load consumption of a residence in London, Ontario show that the performance of the unadjusted-weights architecture is comparable to other state-of-the-art approaches. Furthermore, when the architecture weights are adjusted, the model accuracy surpasses the state-of-the-art method called LSTM one shot by 3.0%.
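
    The evolve-architecture-then-tune-weights loop can be sketched generically. The genome encoding, mutation operator, and fitness below are assumptions; a real fitness would be the validation forecast error of the trained network.

```python
import random

random.seed(0)

def random_architecture():
    """Assumed genome: one to three layers, each with a chosen unit count."""
    return [random.choice([16, 32, 64]) for _ in range(random.randint(1, 3))]

def mutate(arch):
    arch = list(arch)
    arch[random.randrange(len(arch))] = random.choice([16, 32, 64])
    return arch

def evolve(fitness, generations=20, pop_size=10):
    """Evolutionary search over architectures; the paper's second stage
    would run a similar loop over the chosen network's weights."""
    pop = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                        # lower error is better
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return min(pop, key=fitness)

# Toy fitness standing in for validation error: prefer ~96 total units.
print(evolve(lambda arch: abs(sum(arch) - 96)))
```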

    ML4IoT: A Framework to Orchestrate Machine Learning Workflows on Internet of Things Data

    Internet of Things (IoT) applications generate vast amounts of real-time data. Temporal analysis of these data series to discover behavioural patterns may lead to qualified knowledge affecting a broad range of industries. Hence, applying machine learning (ML) algorithms to IoT data has the potential to improve safety, economy, and performance in critical processes. However, creating ML workflows at scale is a challenging task that demands both production engineering and specialized ML skills. Such tasks require the investigation, understanding, selection, and implementation of specific ML workflows, which often leads to bottlenecks, production issues, and code-management complexity, and even then may not yield the desired outcome. This paper proposes the Machine Learning Framework for IoT data (ML4IoT), which is designed to orchestrate ML workflows, particularly on large volumes of data series. The ML4IoT framework enables the implementation of several types of ML models, each with a different workflow. These models can be easily configured and used through a simple pipeline. ML4IoT has been designed to use container-based components to enable the training and deployment of various ML models in parallel. The results obtained suggest that the proposed framework can manage real-world heterogeneous IoT data by providing elasticity, robustness, and performance.
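
    The configuration-driven orchestration idea can be sketched in a few lines. Everything below (step names, the config shape, the registry) is invented for illustration, not ML4IoT's actual API; in the framework itself, each step would run as a containerized component.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Step:
    name: str
    run: Callable[[Any], Any]

# Registry of available workflow components (stand-ins for containers).
REGISTRY = {
    "ingest": Step("ingest", lambda d: d),
    "clean":  Step("clean",  lambda d: [x for x in d if x is not None]),
    "train":  Step("train",  lambda d: {"model": "fitted", "samples": len(d)}),
}

def build_workflow(config):
    """Assemble a workflow from a declarative config (hypothetical shape)."""
    return [REGISTRY[name] for name in config["steps"]]

def execute(workflow, data):
    for step in workflow:
        data = step.run(data)
    return data

config = {"steps": ["ingest", "clean", "train"]}
print(execute(build_workflow(config), [1.2, None, 3.4]))
```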

    Transfer Learning by Similarity Centred Architecture Evolution for Multiple Residential Load Forecasting

    The transition from traditional low-voltage grids to smart systems has become extensive and is being adopted worldwide. Expanding the demand response program to cover the residential sector raises a wide range of challenges. Short-term load forecasting for residential consumers in a neighbourhood could lead to a better understanding of low-voltage consumption behaviour. Nevertheless, users with similar characteristics can present diverse consumption patterns. Consequently, transfer learning methods have become a useful tool to tackle differences among residential time series. This paper proposes a method combining evolutionary algorithms for neural architecture search with transfer learning to perform short-term load forecasting in a neighbourhood with multiple household load consumptions. The approach centres its efforts on neural architecture search using evolutionary algorithms. The neural architecture evolution process retains the patterns of the centre-most house, and the architecture weights are later adjusted for each house in a multi-house set from a neighbourhood. In addition, a sensitivity analysis was conducted to ensure model performance. Experimental results on a large dataset containing hourly load consumption for ten houses in London, Ontario showed that the proposed approach outperforms the compared techniques. Moreover, the proposed method achieves an average accuracy 3.17 points higher than the state-of-the-art LSTM one shot method.
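
    The transfer step itself is simple to illustrate: fit a base forecaster on the centre-most house, then fine-tune a copy per house. The toy linear one-step-ahead model below is an assumed stand-in for the evolved neural architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(w, series, epochs=50, lr=0.1):
    """Gradient steps fitting y[t+1] = w * y[t] (toy forecaster)."""
    x, y = series[:-1], series[1:]
    for _ in range(epochs):
        w -= lr * 2 * np.mean((w * x - y) * x)
    return w

centre_house = rng.normal(1.0, 0.1, size=200)       # stand-in hourly load (kW)
base = train(0.0, centre_house)                     # base weights from centre house

# Transfer: start each neighbour from the centre-most house's weights,
# then briefly fine-tune on that house's own consumption.
houses = [centre_house + rng.normal(0, 0.05, size=200) for _ in range(3)]
tuned = [train(base, h, epochs=10) for h in houses]
print(base, tuned)
```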

    A Systematic Review of Convolutional Neural Network-Based Structural Condition Assessment Techniques

    With recent advances in non-contact sensing technology, such as cameras and unmanned aerial and ground vehicles, the structural health monitoring (SHM) community has witnessed prominent growth in deep learning-based condition assessment techniques for structural systems. These deep learning methods rely primarily on convolutional neural networks (CNNs). The networks are trained on large datasets for various types of damage and anomaly detection and for post-disaster reconnaissance. The trained networks are then used to analyze new data to detect the type and severity of damage, enhancing the capabilities of non-contact sensors in developing autonomous SHM systems. In recent years, researchers have developed a broad range of CNN architectures to accommodate varying lighting and weather conditions, image quality, amounts of background and foreground noise, and multiclass damage in structures. This paper presents a detailed literature review of existing CNN-based techniques in the context of infrastructure monitoring and maintenance. The review is categorized into multiple classes depending on the specific application and the development of CNNs applied to data obtained from a wide range of structures. The challenges and limitations of the existing literature are discussed in detail at the end, followed by a brief conclusion on potential future research directions for CNNs in structural condition assessment.
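
    For readers unfamiliar with the building block this survey revolves around, here is a minimal sketch of the kind of image classifier such methods use, written in PyTorch; the layer sizes and the three damage classes are arbitrary choices for illustration.

```python
import torch
import torch.nn as nn

class DamageClassifier(nn.Module):
    """Minimal CNN: image in, damage class logits out (illustrative only)."""
    def __init__(self, n_classes=3):           # e.g. crack / spalling / intact
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):                       # x: (batch, 3, 64, 64)
        return self.head(self.features(x).flatten(1))

logits = DamageClassifier()(torch.randn(4, 3, 64, 64))
print(logits.shape)                             # torch.Size([4, 3])
```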

    Evaluation of Particle Swarm Optimization Applied to Grid Scheduling

    The problem of scheduling independent users’ jobs to resources in Grid Computing systems is of paramount importance. This problem is known to be NP-hard, and many techniques have been proposed to solve it, such as heuristics, genetic algorithms (GA), and, more recently, particle swarm optimization (PSO). This article applies PSO to grid scheduling problems and compares it with other techniques. It is shown that many often-overlooked implementation details can have a huge impact on the performance of the method. In addition, experiments show that PSO tends to stagnate around local minima on high-dimensional problems. Therefore, this work also proposes a novel hybrid PSO-GA method that increases swarm diversity when a stagnation condition is detected. The method is evaluated and compared with other PSO formulations; the results show that the new method successfully improves the scheduling solution.
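
    The stagnation-triggered hybrid can be sketched compactly. The update coefficients, the stall counter, and the sphere objective below are illustrative assumptions; in the paper's setting the objective would be a schedule cost such as makespan.

```python
import random

random.seed(0)

def cost(x):                          # stand-in for a scheduling objective
    return sum(v * v for v in x)

def pso_ga(dim=10, swarm=20, iters=200, stall_limit=15):
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest, stall = min(pbest, key=cost)[:], 0
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):      # standard inertia + cognitive + social terms
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * random.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
        candidate = min(pbest, key=cost)
        stall = stall + 1 if cost(candidate) >= cost(gbest) else 0
        if cost(candidate) < cost(gbest):
            gbest = candidate[:]
        if stall >= stall_limit:      # stagnation: GA-style crossover + mutation
            for i in range(swarm // 2, swarm):
                a, b = random.sample(pbest, 2)
                cut = random.randrange(1, dim)
                pos[i] = a[:cut] + b[cut:]
                pos[i][random.randrange(dim)] += random.gauss(0, 1.0)
            stall = 0
    return gbest

print(cost(pso_ga()))
```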